
    Appearance Modelling and Reconstruction for Navigation in Minimally Invasive Surgery

    Minimally invasive surgery is playing an increasingly important role in patient care. Whilst its direct patient benefits in terms of reduced trauma, improved recovery and shortened hospitalisation have been well established, there is a sustained need for improved training in existing procedures and for the development of new smart instruments to tackle the issues of visualisation, ergonomic control, and haptic and tactile feedback. For endoscopic intervention, the small field of view in the presence of complex anatomy can easily disorient the operator, as the tortuous access pathway is not always easy to predict and control with standard endoscopes. Effective training through simulation devices, based on either virtual-reality or mixed-reality simulators, can help to improve the spatial awareness, consistency and safety of these procedures. This thesis examines the use of endoscopic videos for both simulation and navigation purposes. More specifically, it addresses the challenging problem of how to build high-fidelity, subject-specific simulation environments for improved training and skills assessment. Issues related to mesh parameterisation and texture blending are investigated. With the maturity of computer vision in terms of both 3D shape reconstruction and localisation and mapping, vision-based techniques have enjoyed significant interest in recent years for surgical navigation. The thesis also tackles the problem of how to use vision-based techniques to provide a detailed 3D map and a dynamically expanded field of view, improving spatial awareness and avoiding operator disorientation. The key advantage of this approach is that it requires no additional hardware and thus introduces minimal interference to the existing surgical workflow. The derived 3D map can be effectively integrated with pre-operative data, allowing both global and local 3D navigation by taking into account tissue structural and appearance changes. Both simulation and laboratory-based experiments are conducted throughout this research to assess the practical value of the proposed method.

    Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network

    Despite the breakthroughs in accuracy and speed of single image super-resolution using faster and deeper convolutional neural networks, one central problem remains largely unsolved: how do we recover the finer texture details when we super-resolve at large upscaling factors? The behavior of optimization-based super-resolution methods is principally driven by the choice of the objective function. Recent work has largely focused on minimizing the mean squared reconstruction error. The resulting estimates have high peak signal-to-noise ratios, but they often lack high-frequency detail and are perceptually unsatisfying in the sense that they fail to match the fidelity expected at the higher resolution. In this paper, we present SRGAN, a generative adversarial network (GAN) for image super-resolution (SR). To our knowledge, it is the first framework capable of inferring photo-realistic natural images for 4x upscaling factors. To achieve this, we propose a perceptual loss function which consists of an adversarial loss and a content loss. The adversarial loss pushes our solution to the natural image manifold using a discriminator network that is trained to differentiate between the super-resolved images and original photo-realistic images. In addition, we use a content loss motivated by perceptual similarity instead of similarity in pixel space. Our deep residual network is able to recover photo-realistic textures from heavily downsampled images on public benchmarks. An extensive mean-opinion-score (MOS) test shows hugely significant gains in perceptual quality using SRGAN. The MOS scores obtained with SRGAN are closer to those of the original high-resolution images than to those obtained with any state-of-the-art method.
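    As a concrete illustration of the loss structure described above, here is a minimal PyTorch sketch of a perceptual loss combining a VGG-based content term with an adversarial term. The specific VGG layer slice, the 1e-3 adversarial weighting, and all names are illustrative assumptions rather than the paper's exact configuration.

```python
import torch
import torch.nn as nn
from torchvision.models import vgg19, VGG19_Weights

# Sketch of an SRGAN-style perceptual loss: a content loss computed on
# VGG feature maps plus an adversarial loss from the discriminator's
# output on the super-resolved image. Layer choice and weighting are
# illustrative assumptions, not the paper's exact settings.

class PerceptualLoss(nn.Module):
    def __init__(self, adv_weight: float = 1e-3):
        super().__init__()
        # Frozen VGG19 feature extractor (truncated deep in block 5).
        vgg = vgg19(weights=VGG19_Weights.DEFAULT).features[:36].eval()
        for p in vgg.parameters():
            p.requires_grad = False
        self.vgg = vgg
        self.mse = nn.MSELoss()
        self.bce = nn.BCEWithLogitsLoss()
        self.adv_weight = adv_weight

    def forward(self, sr, hr, disc_logits_sr):
        # Content loss: MSE between VGG feature maps of the
        # super-resolved and ground-truth high-resolution images.
        content = self.mse(self.vgg(sr), self.vgg(hr))
        # Adversarial loss: push the generator towards images the
        # discriminator classifies as real (label 1).
        adversarial = self.bce(disc_logits_sr,
                               torch.ones_like(disc_logits_sr))
        return content + self.adv_weight * adversarial
```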

    Utilizing confocal laser endomicroscopy for evaluating the adequacy of laparoscopic liver ablation

    BACKGROUND: Laparoscopic liver ablation therapy can be used for the treatment of primary and secondary liver malignancy. The increased incidence of cancer recurrence associated with this approach has been attributed to the inability to monitor the extent of ablated liver tissue. METHODS: The feasibility of assessing liver ablation with probe-based confocal laser endomicroscopy (CLE) was studied in a porcine model of laparoscopic microwave liver ablation. Following the intravenous injection of the fluorophores fluorescein and indocyanine green, CLE images were recorded at 488 nm and 660 nm wavelengths and compared to liver histology. Statistical analysis was performed to assess whether a change in fluorescence intensity can predict the presence of ablated liver tissue. RESULTS: CLE imaging of fluorescein at 488 nm provided good visualization of the hepatic microvasculature, whereas CLE imaging of indocyanine green at 660 nm enabled detailed visualization of hepatic sinusoid architecture and interlobular septations. Fluorescence intensity, as measured in relative fluorescence units, was found to be 75–100% lower in ablated compared to healthy liver regions. General linear mixed modeling and ROC analysis found the decrease in fluorescence to be statistically significant. CONCLUSION: Laparoscopic, dual-wavelength CLE imaging using two different fluorophores enables clinically useful visualization of multiple liver tissue compartments, in greater detail than is possible at a single wavelength. CLE imaging may provide valuable intraoperative information on the extent of laparoscopic liver ablation. Lasers Surg. Med. 48:299–310, 2016. © 2015 The Authors. Lasers in Surgery and Medicine published by Wiley Periodicals, Inc.
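    To make the statistical endpoint concrete, below is a hedged NumPy/scikit-learn sketch of an ROC analysis testing whether lower relative fluorescence predicts ablated tissue. The function name, the synthetic intensity values and the Youden-threshold choice are illustrative assumptions, not the study's actual pipeline (which used general linear mixed modeling alongside ROC analysis).

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

# Illustrative sketch: test whether a drop in relative fluorescence units
# (RFU) separates ablated from healthy liver regions. `intensities` holds
# one mean intensity per imaged region; label 1 = ablated, 0 = healthy.

def ablation_roc(intensities: np.ndarray, labels: np.ndarray):
    # Lower intensity indicates ablation, so score by negated intensity.
    scores = -intensities
    auc = roc_auc_score(labels, scores)
    fpr, tpr, thresholds = roc_curve(labels, scores)
    # Pick the threshold maximising Youden's J statistic (tpr - fpr).
    best = np.argmax(tpr - fpr)
    return auc, -thresholds[best]  # threshold back in intensity units

# Hypothetical example: healthy ~100 RFU, ablated 75-100% lower.
rng = np.random.default_rng(0)
healthy = rng.normal(100, 10, 50)
ablated = rng.normal(15, 8, 50).clip(min=0)
auc, thr = ablation_roc(np.concatenate([healthy, ablated]),
                        np.concatenate([np.zeros(50), np.ones(50)]))
print(f"AUC={auc:.3f}, intensity threshold={thr:.1f} RFU")
```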

    Locally rigid, vessel-based registration for laparoscopic liver surgery

    PURPOSE: Laparoscopic liver resection has significant advantages over open surgery due to less patient trauma and faster recovery times, yet it is difficult for most lesions due to the restricted field of view and lack of haptic feedback. Image guidance provides a potential solution but is challenging in a soft, deforming organ such as the liver. In this paper, we therefore propose a laparoscopic ultrasound (LUS) image guidance system and study the feasibility of a locally rigid registration for laparoscopic liver surgery. METHODS: We developed a real-time segmentation method to extract vessel centre points from calibrated, freehand, electromagnetically tracked 2D LUS images. Using a landmark-based initial registration and an optional iterative closest point (ICP) point-to-line registration, a vessel centre-line model extracted from preoperative computed tomography (CT) is registered to the ultrasound data during surgery. RESULTS: Using the locally rigid ICP method, the RMS residual error when registering to a phantom was 0.7 mm, and the mean target registration error (TRE) for two in vivo porcine studies was 3.58 and 2.99 mm, respectively. Using the locally rigid landmark-based registration method gave a mean TRE of 4.23 mm using vessel centre lines derived from CT scans taken with pneumoperitoneum and 6.57 mm without pneumoperitoneum. CONCLUSION: In this paper we propose a practical image-guided surgery system based on locally rigid registration of a CT-derived model to vascular structures located with LUS. In a physical phantom and during porcine laparoscopic liver resection, we demonstrate accuracy of target location commensurate with surgical requirements. We conclude that locally rigid registration could be sufficient for practically useful image guidance in the near future.
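    For readers unfamiliar with ICP, the following is a minimal Python sketch of a rigid ICP in the spirit of the method described: LUS-derived vessel centre points are aligned to a CT-derived centre-line, here approximated by densely sampled points so that a nearest-neighbour query stands in for true point-to-line projection. This simplification and all names are assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.spatial import cKDTree

def best_rigid(src, dst):
    """Least-squares rigid transform (Kabsch) mapping src onto dst."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)          # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, mu_d - R @ mu_s

def icp(lus_points, ct_centreline, iters=50, tol=1e-6):
    """Align LUS vessel centre points (Nx3) to densely sampled
    CT centre-line points (Mx3); returns rotation, translation, RMS."""
    tree = cKDTree(ct_centreline)
    R, t = np.eye(3), np.zeros(3)
    prev_rms = np.inf
    for _ in range(iters):
        moved = lus_points @ R.T + t
        dists, idx = tree.query(moved)         # closest centre-line samples
        R, t = best_rigid(lus_points, ct_centreline[idx])
        rms = np.sqrt(np.mean(dists ** 2))
        if abs(prev_rms - rms) < tol:
            break
        prev_rms = rms
    return R, t, rms
```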

    Accuracy validation of an image guided laparoscopy system for liver resection

    We present an analysis of the registration component of a proposed image guidance system for image-guided liver surgery, using contrast-enhanced CT. The analysis is performed on a visually realistic liver phantom and on in vivo porcine data. A robust registration process that can be deployed clinically is a key component of any image-guided surgery system. It is also essential that the accuracy of the registration can be quantified and communicated to the surgeon. We summarise the proposed guidance system and discuss its clinical feasibility. The registration combines an intuitive manual alignment stage, surface reconstruction from a tracked stereo laparoscope, and a rigid iterative closest point registration to register the intra-operative liver surface to the liver surface derived from CT. Testing of the system on a liver phantom shows that subsurface landmarks can be localised to an accuracy of 2.9 mm RMS. Testing during five porcine liver surgeries demonstrated that registration can be performed during surgery, with an error of less than 10 mm RMS for multiple surface landmarks.
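    The accuracy figures above correspond to an RMS target registration error over landmarks; a hedged Python sketch of that metric follows. The function and argument names are hypothetical placeholders, with `R` and `t` assumed to come from the registration stage.

```python
import numpy as np

# Sketch of the accuracy metric described above: after registration,
# the RMS target registration error (TRE) over landmarks, in mm.

def tre_rms(ct_landmarks, intraop_landmarks, R, t):
    """RMS distance between registered CT landmarks (Nx3) and their
    intra-operatively localised counterparts (Nx3)."""
    mapped = ct_landmarks @ R.T + t
    errors = np.linalg.norm(mapped - intraop_landmarks, axis=1)
    return np.sqrt(np.mean(errors ** 2))
```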